Search Results: "Ed J"

5 April 2022

Matthew Garrett: Bearer tokens are just awful

As I mentioned last time, bearer tokens are not super compatible with a model in which every access is verified to ensure it's coming from a trusted device. Let's talk about that in a bit more detail.

First off, what is a bearer token? In its simplest form, it's an opaque blob that you give to a user after an authentication or authorisation challenge, and that they then show to you to prove they should be allowed access to a resource. In theory you could just hand someone a randomly generated blob, but then you'd need to keep track of which blobs you've issued, when they should expire, and who they correspond to. So this is frequently done using JWTs, which contain some base64-encoded JSON describing the user, group membership and so on, plus a signature. Whenever the user presents one you just validate the signature and then assume the contents of the JSON are trustworthy.
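
To make that concrete, here's roughly what that looks like in Python with the PyJWT library; the claim names, values and the HS256 shared secret are all illustrative rather than anything a particular identity provider mandates:

# Minimal sketch of issuing and checking a JWT-style bearer token with PyJWT.
# The signing key and the claims are illustrative placeholders.
import jwt

SIGNING_KEY = "server-side-secret"

# Issuer side: pack identity claims into a signed token.
token = jwt.encode(
    {"sub": "alice", "groups": ["developers"]},
    SIGNING_KEY,
    algorithm="HS256",
)

# Resource side: anyone presenting the token gets the claims back,
# provided only that the signature checks out.
claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
print(claims["sub"], claims["groups"])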

One thing to note here is that the crypto is purely between whoever issued the token and whoever validates the token - as far as the server is concerned, any client that can show it the token is fine as long as the signature verifies. There's no way to verify the client's state, so one of the core ideas of Zero Trust (that we verify that the client is in a trustworthy state on every access) is already violated.

Can we make things not terrible? Sure! We may not be able to validate the client state on every access, but we can validate the client state when we issue the token in the first place. When the user hits a login page, we do state validation according to whatever policy we want to enforce, and if the client violates that policy we refuse to issue a token to it. If the token has a sufficiently short lifetime then an attacker is only going to have a short period of time to use that token before it expires and then (with luck) they won't be able to get a new one because the state validation will fail.
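
As a sketch of that issuance-time check (again with PyJWT; check_device_posture() and the fifteen-minute lifetime are made-up placeholders for whatever the real policy and token lifetime would be):

# Sketch: only issue a token after a device state check, and keep its
# lifetime short so a stolen copy goes stale quickly.
import datetime
import jwt

SIGNING_KEY = "server-side-secret"

def check_device_posture(device: dict) -> bool:
    # Hypothetical stand-in for whatever state validation the policy requires.
    return device.get("disk_encrypted", False) and device.get("os_patched", False)

def issue_token(user: str, device: dict) -> str:
    if not check_device_posture(device):
        raise PermissionError("device failed state validation")
    now = datetime.datetime.now(datetime.timezone.utc)
    return jwt.encode(
        {"sub": user, "exp": now + datetime.timedelta(minutes=15)},
        SIGNING_KEY,
        algorithm="HS256",
    )

# Validation then rejects stale tokens automatically: jwt.decode() raises
# jwt.ExpiredSignatureError once the expiry has passed, forcing the client
# back through the state validation to get a new one.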

Except! This is fine for cases where we control the issuance flow. What if we have a scenario where a third party authenticates the client (by verifying that they have a valid token issued by their ID provider) and then uses that to issue their own token that's much longer lived? Well, now the client has a long-lived token sitting on it. And if anyone copies that token to another device, they can now pretend to be that client.

This is, sadly, depressingly common. A lot of services will verify the user, and then issue an oauth token that'll expire some time around the heat death of the universe. If a client system is compromised and an attacker just copies that token to another system, they can continue to pretend to be the legitimate user until someone notices (which, depending on whether or not the service in question has any sort of audit logs, and whether you're paying any attention to them, may be once screenshots of your data show up on Twitter).

This is a problem! There's no way to fit a hosted service that behaves this way into a Zero Trust model - the best you can say is that a token was issued to a device that was, around that time, apparently trustworthy, and now it's some time later and you have literally no idea whether the device is still trustworthy or if the token is still even on that device.

But wait, there's more! Even if you're nowhere near doing any sort of Zero Trust stuff, imagine the case of a user having a bunch of tokens from multiple services on their laptop, and then they leave their laptop unlocked in a cafe while they head to the toilet and whoops it's not there any more, better assume that someone has access to all the data on there. How many services has our opportunistic new laptop owner gained access to as a result? How do we revoke all of the tokens that are sitting there on the local disk? Do you even have a policy for dealing with that?

There isn't a simple answer to all of these problems. Replacing bearer tokens with some sort of asymmetric cryptographic challenge to the client would at least let us tie the tokens to a TPM or other secure enclave, and then we wouldn't have to worry about them being copied elsewhere. But that wouldn't help us if the client is compromised and the attacker simply keeps using the compromised client. The entire model of simply proving knowledge of a secret being sufficient to gain access to a resource is inherently incompatible with a desire for fine-grained trust verification on every access, but I don't see anything changing until we have a standard for third party services to be able to perform that trust verification against a customer's policy.
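
For illustration, here's a rough sketch of that kind of challenge/response using Python's cryptography package with ordinary software keys; in the scenario described above the private key would live in a TPM or other secure enclave rather than in process memory:

# Sketch of an asymmetric challenge/response: the server learns that the
# client holds a particular private key without the key ever travelling.
# Here the key lives in memory; the point of TPM binding is that it wouldn't.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrolment: key pair generated client-side, public key registered server-side.
client_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = client_key.public_key()

# Per-access: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the client signs it with the (ideally hardware-bound) private key...
signature = client_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the enrolled public key.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge answered by the enrolled key")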

Still, at least this means I can just run weird Android IoT apps through mitmproxy, pull the bearer token out of the request headers and then start poking the remote API with curl. It may all be broken, but it's also got me a bunch of bug bounty credit, so, it's impossible to say if it's bad or not.

(Addendum: this suggestion that we solve the hardware binding problem by simply passing all the network traffic through some sort of local enclave that could see tokens being set and would then sequester them and reinject them into later requests is OBVIOUSLY HORRIFYING and is also probably going to be at least three startup pitches by the end of next week)


9 March 2022

Jonathan Dowland: Broken webcam aspect ratio

Sony RX100-III, relegated to a webcam
Sometimes I have remote meetings with Google Meet. Unlike the other video-conferencing services that I use (Bluejeans, Zoom), my video was stretched out of proportion under Google Meet with Firefox. I haven't found out why this was happening, but I did figure out a work-around. Thanks to Daniel Silverstone, Rob Kendrick, Gregor Herrmann and Ben Allen for pointing me in the right direction!

Hardware

The lovely Sony RX-100 mk3 that I bought in 2015 has spent most of its life languishing unused. During the pandemic, once I was working from home all the time, I decided to press-gang it into service as a better-quality webcam. Newer models of this camera (the mark 4 onwards) have support for a USB mode called "PC Remote", which effectively makes them into webcams. Unfortunately my mark 3 does not support this, but it does have HDMI out, so I picked up a cheap "HDMI to USB Video Capture Card" from eBay.

Video modes
Before: wrong aspect ratio
This device offers a selection of different video modes over a webcam interface. I used qv4l2 to explore the different modes. It became clear that the camera was outputting a signal at 16:9, but the modes on offer from the dongle were for a range of different aspect ratios. The picture for these other ratios was not letter- or pillar-boxed, but stretched to fit. I also noticed that the modes which had the correct aspect ratio were at very low framerates: 1920x1080@5fps, 1360x768@8fps, 1280x720@10fps. It felt to me that I would look unnatural at such a low framerate. The most promising mode was close to the right ratio, 720x480 at 30 fps.

Software
After: corrected aspect ratio
My initial solution is to use the v4l2loopback kernel module, which provides a virtual loop-back webcam interface. I can write video data to it from one process, and read it back from another. Loading it as follows:
modprobe v4l2loopback exclusive_caps=1
The option exclusive_caps configures the module into a mode where it initially presents a write-only interface, but once a process has opened a file handle, it then switches to read-only for subsequent processes. Assuming there are no other camera devices connected at the time of loading the module, it will create /dev/video0.[1] I experimented briefly with OBS Studio, the very versatile and feature-full streaming tool, which confirmed that I could use filters on the source video to fix the aspect ratio, and emit the result to the virtual device. I don't otherwise use OBS, though, so I achieve the same result using ffmpeg:
ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo \
    -pix_fmt yuyv422 -s 720x405 /dev/video0
The source options are to select the source video mode I want. The codec and pixel formats are to match what is being emitted (I determined that using ffprobe on the camera device). The resizing is triggered by supplying a different size to the second -s parameter. I think that is equivalent to explicitly selecting a "scale" filter, and there might be other filters that could be used instead (to add pillar boxes for example). This worked just as well. In Google Meet, I select the Virtual Camera, and Google Meet is presented with only one video mode, in the correct aspect ratio, and no configurable options for it, so it can't misbehave.

Future

I'm planning to automate the loading (and unloading) of the module and starting the ffmpeg process in response to the real camera device being plugged or unplugged, using systemd events and services. (I don't leave the camera plugged in all the time due to some bad USB behaviour I've experienced if I do so.) If I get that working, I will write a follow-up.

  1. you can request a specific device name/number with another module option.

26 February 2022

Daniel Silverstone: Subplot and FOSDEM 2022 talk

As many of you may be aware, I work with Lars Wirzenius on a project we call Subplot which is a tool for writing documentation which helps all stakeholders involved with a project to understand how the project meets its requirements. At the start of February we had FOSDEM which was once again online, and I decided to give a talk in the Safety and open source devroom to introduce the concepts of safety argumentation and to bring some attention to how I feel that Subplot could be used in that arena. You can view the talk on the FOSDEM website at some point in the future when they manage to finish transcoding all the amazing talks from the weekend, or if you are more impatient, on Youtube, whichever you prefer. If, after watching the talk, or indeed just reading about Subplot on our website, you are interested in learning more about Subplot, or talking with us about how it might fit into your development flow, then you can find Lars and myself in the Subplot Matrix Room or else on any number of IRC networks where I hang around as kinnison.

25 February 2022

Dirk Eddelbuettel: Rcpp now used by 2500 CRAN packages!

As of this morning, Rcpp stands at 2501 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time. Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages in early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018, 1750 packages in August 2019, 2000 packages in July 2020, and 2250 packages in March of last year. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A (manually curated) list of packages using Rcpp is available too. Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent mid-December 2016, cracked ten percent in the summer of 2017, eleven percent in 2018, and passed 12.5 percent (one in every eight CRAN packages depends on Rcpp) along with the 2000 packages mark. Truly stunning. As before, there is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks but not in as much detail as it could), how the growth of Rcpp seems to be slowing somewhat outright and even more so as a proportion of CRAN as one would expect a growth curve to. The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been (and see e.g. here for some more details). A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 February 2022

Dirk Eddelbuettel: #36: pub/sub for live market monitoring with R and Redis

Welcome to the 36th post of the really randomly reverberating R, or R4 for short, write-ups. Today's post is about using Redis, and especially RcppRedis, for live or (near) real-time monitoring with R. There is a saying that you can take the boy out of the valley, but you cannot take the valley out of the boy, so for those of us who spent a decade or two in finance and on trading floors, having some market price information available becomes second nature. And/or sometimes it is just good fun to program this. A good while back Josh posted a gist on a simple-yet-robust while loop. It (very cleverly) uses his quantmod package to access the SP500 in "real-time". (I use quotes here because at the end of retail broadband one does not see the same market action as someone co-located in a New Jersey data center. It is, however, not delayed: as an index, it is not immediately tradeable as a stock, ETF, or derivative may be, all of which are only disseminated as delayed price information, usually by ten minutes.) I quite enjoyed the gist, used it, and started tinkering with it. For example, it collects data but only saves (i.e. persists) it after market close. If for whatever reason one needs to restart, recent history is gone. In any event, I used his code, generalized it a little, and published this about a year ago as function intradayMarketMonitor() in my dang package. (See this blog post announcing it.) The chart on the left shows this in action; it is a snapshot from a couple of days ago when the vignettes (more on them below) were written. As lovely as intradayMarketMonitor() is, it also limits itself to market hours. And sometimes you want to see, say, how the market opens on Sunday (futures usually restart at 17h Chicago time), or how news dissipates during the night, or where markets are pre-open, or ... So I both wanted to complement this with futures, and also cache it locally so that, say, one machine might collect data and one (or several others) can visualize it. For such tasks, Redis is unparalleled. (Yet I also always felt Redis could do with another simple, short and sweet introduction stressing the key features of i) being multi-lingual: write in one language, consume in another, and ii) loose coupling: no linking, as one talks to Redis via standard TCP/IP networking. So I wrote a new intro vignette that is now in RcppRedis. I hope this comes in handy. Comments welcome!) Our RcppRedis package had long been used for such tasks, and it was easy to set it up. Standard use is to loop, fetch some data, push it to Redis, sleep, and start over. Clients do the same: fetch the most recent data, plot or report it, sleep, start over. That works, but it has a dual delay as the sleeping client may miss the data update! The standard answer to this is called publish/subscribe, or pub/sub. Libraries such as 0mq or zeromq specialise in this. But it turns out Redis already has it. I had some initial difficulty adding it to RcppRedis, so for a trial I tested the marvellous rredis package by Bryan and simply instantiated two Redis clients. Now the data getter simply publishes a new data point in a given channel, by convention named after the security it tracks. Clients register with the Redis server, which does all the actual work of keeping track of who listens to what. The clients now simply listen (which is a blocking operation) and, as soon as data comes in, receive it. This is quite mesmerizing when you just run two command-line clients (in a byobu session, say).
As soon as the data is written (as shown on the console log) it is consumed. No measurable overhead. Just lovely. Bryan and I then talked a little as he may or may not retire rredis. Having implemented the pub/sub logic for both sides once, he took a good hard look at RcppRedis and, just like that, added it there. With some really clever wrinkles for (optional) per-symbol callbacks as closures attached to the instance. Truly amazeballs. And once we had it in there, the scheme easily generalizes from publishing or subscribing to just one symbol to having one listener collect and publish for multiple symbols, and having one or more clients subscribe and listen to one, more, or even all symbols. All with ease thanks to Redis. The second chart, also from a few days ago, shows four symbols for four (front-contract) futures for Bitcoin, Crude Oil, SP500, and Gold. As all this can get a little technical, I wrote a second vignette for RcppRedis on just this: market monitoring. Give this a read if interested; feedback on this one is most welcome too! But all the code you need is included in the package: just run a local Redis instance. Before closing, one sour note. I uploaded all this in a new and much-improved RcppRedis 0.2.0 to CRAN on March 13, ten days ago. Not only is it still not there, but CRAN, in their most delightful way, also refuses to answer any emails of mine. Just lovely. The package exhibited just one compiler warning: a C++ compiler objected to the (embedded) C library hiredis (included as a fallback) for using a C language construct. Yes. A C++ compiler complaining about C. It's a non-issue. Yet it's been ten days and we still have nothing. So irritating and demotivating. Anyway, you can get the package off its GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.
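
(The code in the post is R via RcppRedis, but the pub/sub pattern described above looks much the same in any Redis client. Here is a minimal sketch with Python's redis-py; the channel name and payload are purely illustrative.)

# Sketch of the pub/sub pattern: a producer publishes ticks on a channel
# named after the symbol, and consumers block until data arrives.
# Channel name and payload are illustrative; the post itself uses R/RcppRedis.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def publish_tick(symbol: str, price: float) -> None:
    # Producer: push each new data point to the channel for that symbol.
    r.publish(symbol, json.dumps({"symbol": symbol, "price": price}))

def consume(symbol: str) -> None:
    # Consumer: subscribe and block until the next message -- no sleep loop,
    # no missed updates while sleeping.
    pubsub = r.pubsub()
    pubsub.subscribe(symbol)
    for message in pubsub.listen():
        if message["type"] == "message":
            print(json.loads(message["data"]))

# e.g. publish_tick("ES1", 4567.25) in one process, consume("ES1") in another.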

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 February 2022

Jonathan McDowell: Upgrading my home internet; a story of yak shaving

This has ended up longer than I expected. I'll write up posts about some of the individual steps with some more details at some point, but this is an overview of the yak shaving I engaged in. The TL;DR is:

The desire for a faster connection

When I migrated my home connection to FTTP I kept the same 80M/20M profile I'd had on FTTC. I didn't have a pressing need for faster, and I saved money because I was no longer paying for the phone line portion. I wanted more, but at the time I think the only option was for a 160M/30M profile instead and I didn't need it and it wasn't enough better to convince me. Time passed and BT rolled out their GigE (really 900M) download option. And again, I didn't need it, but I wanted it. My provider, Aquiss, initially didn't offer this (I think they had up to 330M download options available by this point). So I stayed on 80M/20M. And the only time I really wanted it to be faster was when pushing off-site backups to rsync.net. Of course, we've had the pandemic, and that's involved 2 adults working from home with plenty of video calls throughout the day. The 80M/20M connection has proved rock solid for this, so again, I didn't feel an upgrade was justified. We got a 4K capable TV last year and while the bandwidth usage for 4K streaming is noticeably higher, again the connection can handle it no problem. At some point last year I noticed Aquiss had added speed options all the way to 900M down. At the end of the year I accepted a new role, which is fully remote, so I had a bit of an acceptance about the fact that I wasn't going back into an office any time soon. The combination (and the desire for the increased upload speed) finally allowed me to justify the upgrade to myself.

Testing the current setup for bottlenecks

The first thing to do was see whether my internal network could cope with an upgrade. I'm mostly running Cat6 GigE so I wasn't worried about that side of things. However I'm using an RB3011 as my core router, and while it has some coprocessors for routing acceleration they're not supported under mainline Linux (and unlikely to be any time soon). So I had to benchmark what it was capable of routing. I run a handful of VLANs within my home network, with stateful firewalling between them, so I felt that would be a good approximation of the maximum speed to the outside world I might be able to get if I had the external connection upgraded. I went for the easy approach and fired up iPerf3 on 2 hosts, both connected via ethernet but on separate networks, so routed through the RB3011. That resulted in slightly more than a 300Mb/s throughput. Ok. I confirmed that I could get 900Mb/s+ on 2 hosts both on the same network, just to be sure there wasn't some other issue I was missing. Nope, so unsurprisingly the router was the bottleneck. So. To upgrade my internet speed I need to upgrade my router. I could just buy something off the shelf, but I like being able to run Debian (or OpenWRT) on the router rather than some horrible vendor firmware. Luckily MikroTik launched the RB5009 towards the end of last year. RouterOS is probably more than capable, but what really interested me was the fact it's an ARM64 platform based on an Armada 7040, which is pretty well supported in mainline kernels already. There's a 10G connection from the internal switch to the CPU, as well as a 2.5Gb/s ethernet port and a 10G SFP+ cage. All good stuff. I ordered one just before the New Year. Thankfully the OpenWRT folk had done all of the hard work on getting a mainline kernel booting on the device; Sergey Sergeev and Robert Marko in particular fighting RouterBoot and producing a suitable device tree file to get everything up and running. I ended up soldering a serial console connection up to aid debugging, and lightly patching Rob's u-boot to fix the incorrect RAM size reported by RouterBoot. A few kernel tweaks were necessary to make the networking entirely happy and at that point it was time to think about actually doing a replacement.

Upgrading to Debian 11 (bullseye)

My RB3011 is currently running Debian 10 (buster); an upgrade has been on my todo list, but with the impending replacement I decided I'd hold off and create a new Debian 11 (bullseye) image for the RB5009. Additionally, I don't actually run off the internal NAND in the RB3011; I have a USB flash drive for the rootfs and just the kernel booting off internal NAND. Originally this was for ease of testing, then a combination of needing to figure out a good read-only root solution and a small enough image to fit in the 120M available. For the upgrade I decided to finally look at these pieces. I've ended up with a script that will build me a squashfs image, and the initial rootfs takes care of mounting this and then a tmpfs as an overlay fs. That means I can easily see what pieces are being written to. The RB5009 has a total of 1G NAND so I'm not as space constrained, but the squashfs ends up under 50M. I've added some additional pieces to allow me to pre-populate the overlay fs with updates rather than always needing to rebuild the squashfs image. With that done I decided to try it out on the RB3011; I tweaked the build script to be able to build for armhf (the RB3011) or arm64 (the RB5009) and to deal with some slight differences in configuration between the two (e.g. interface naming). The idea here was to ensure I'd got all the appropriate configuration sorted for the RB5009, in the known-good existing environment. Everything is still on a USB stick at this stage and the new device has an armhf busybox root meaning it can be used on either device, and the init script detects the architecture to select the appropriate squashfs to mount.

A problem with ESP8266 home automation devices

Everything seemed to work fine - a few niggles with the watchdog, which is overly sensitive on the RB3011, but I got those sorted (and the build script updated) and the device came up and successfully did the PPPoE dance to bring up external connectivity. And then I noticed that my home automation devices were having problems connecting to the mosquitto MQTT server. It turned out it was only the ESP8266 based devices that were failing, and examining the serial debug output on one of my test devices revealed it was hitting an out of memory issue (displaying E:M 280) when establishing the TLS MQTT connection. I rolled back to the Debian 10 image and set about creating a test environment to look at the ESP8266 issues. My first action was to try and reduce my RAM footprint to try and ensure there was enough spare to establish the connection. I moved a few functions that were still sitting in IRAM into flash. I cleaned up a couple of buffers that are on the stack to be more correctly sized. I tried my new image, and I didn't get the memory issue. Instead I progressed a bit further and got a watchdog reset. Doh! It was obviously something related to the TLS connection, but I couldn't easily see what the difference was; the same x509 cert was in use, and it looked like the initial handshake was the same (and trying with openssl s_client looked pretty similar too). I set about instrumenting the ancient Mbed TLS used in the Espressif SDK and discovered that whatever had changed between buster and bullseye meant the ESP8266 was now trying a TLS-DHE-RSA-WITH-AES-256-CBC-SHA256 handshake instead of a TLS-RSA-WITH-AES-256-CBC-SHA256 handshake, and that was causing enough extra CPU usage that it couldn't complete in time and the watchdog kicked in. So I commented out MBEDTLS_KEY_EXCHANGE_DHE_RSA_ENABLED in the config_esp.h for mbedtls and rebuilt things. Hacky, but I'll go back to trying to improve this generally at some point.

A detour into interrupt load

Now, my testing of the RB3011 image is generally done at weekends, when I have enough time to tear down and rebuild the connection rather than doing it in the evening and having limited time to get things working again in time for work in the morning. So at the point I had an image ready to go I pulled the trigger on the line upgrade. I went with the 500M/75M option rather than the full 900M - I suspect I'd have difficulty actually getting that most of the time and 75M of upload bandwidth seems fairly substantial for now. It only took a couple of days from the order to the point the line was regraded (which involved no real downtime - just a reconnection in the night). Of course this happened just after the weekend I'd discovered the ESP8266 issue. This provided an opportunity to see just what the RB3011 could actually manage. In the configuration I had, it turned out to be not much more than the 80Mb/s speeds I had previously seen. The upload jumped from a solid 20Mb/s to 75Mb/s, so I knew the regrade had actually happened. Looking at CPU utilisation clearly showed the problem; softirqs were using almost 100% of a CPU core. Now, the way the hardware is set up on the RB3011 is that there are two separate 5 port switches, each connected back to the CPU via a separate GigE interface. For various reasons I had everything on a single switch, which meant that all traffic was boomeranging in and out of the same CPU interface. The IPQ8064 has dual cores, so I thought I'd try moving the external connection to the other switch. That puts it on its own GigE CPU interface, which then allows binding the interrupts to a different CPU core. That helps; throughput to the outside world hits 140Mb/s+. Still a long way from the expected max, but proof we just need more grunt.

Success

Which brings us to this past weekend, when, having worked out all the other bits, I tried the squashfs root image again on the RB3011. Success! The home automation bits connected to it, the link to the outside world came up, everything seemed happy. So I double checked my bootloader bits on the RB5009, brought it down to the comms room and plugged it in instead. And, modulo my failing to update the nftables config to allow it to do forwarding, it all came up ok. Some testing with iperf3 internally got a nice 912Mb/s sustained between subnets, and some less scientific testing with wget + speedtest-cli saw speeds of over 460Mb/s to the outside world. Time from ordering the router until it was in service? Just under 8 weeks.

5 February 2022

Reproducible Builds: Reproducible Builds in January 2022

Welcome to the January 2022 report from the Reproducible Builds project. In our reports, we try to outline the most important things that have been happening in the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.
An interesting blog post was published by Paragon Initiative Enterprises about Gossamer, a proposal for securing the PHP software supply-chain. Utilising code-signing and third-party attestations, Gossamer aims to mitigate the risks within the notorious PHP world via publishing attestations to a transparency log. Their post, titled Solving Open Source Supply Chain Security for the PHP Ecosystem goes into some detail regarding the design, scope and implementation of the system.
This month, the Linux Foundation announced SupplyChainSecurityCon, a conference focused on exploring the security threats affecting the software supply chain, sharing best practices and mitigation tactics. The conference is part of the Linux Foundation's Open Source Summit North America and will take place June 21st-24th 2022, both virtually and in Austin, Texas.

Debian

There was significant progress made in the Debian Linux distribution this month, including:

Other distributions

kpcyrd reported on Twitter about the release of version 0.2.0 of pacman-bintrans, an experiment with binary transparency for the Arch Linux package manager, pacman. This new version is now able to query rebuilderd to check if a package was independently reproduced.
In the world of openSUSE, however, Bernhard M. Wiedemann posted his monthly reproducible builds status report.

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 199, 200, 201 and 202 to Debian unstable (that were later backported to Debian bullseye-backports by Mattia Rizzolo), as well as made the following changes to the code itself:
  • New features:
    • First attempt at incremental output support with a timeout. Now passing, for example, --timeout=60 will mean that diffoscope will not recurse into any sub-archives after 60 seconds total execution time has elapsed. Note that this is not a fixed/strict timeout due to implementation issues. [ ][ ]
    • Support both variants of odt2txt, including the one provided by the unoconv package. [ ]
  • Bug fixes:
    • Do not return with a UNIX exit code of 0 if we encounter a file whose human-readable metadata matches literal file contents. [ ]
    • Don't fail if comparing a nonexistent file with a .pyc file (and add a test). [ ][ ]
    • If the debian.deb822 module raises any exception on import, re-raise it as an ImportError. This should fix diffoscope on some Fedora systems. [ ]
    • Even if a Sphinx .inv inventory file is labelled "The remainder of this file is compressed using zlib", it might not actually be. In this case, don't traceback and simply return the original content. [ ]
  • Documentation:
    • Improve documentation for the new --timeout option due to a few misconceptions. [ ]
    • Drop reference in the manual page claiming the ability to compare non-existent files on the command-line. (This has not been possible since version 32 which was released in September 2015). [ ]
    • Update "X has been modified after NT_GNU_BUILD_ID has been applied" messages to, for example, not duplicate the full filename in the diffoscope output. [ ]
  • Codebase improvements:
    • Tidy some control flow. [ ]
    • Correct a recompile typo. [ ]
In addition, Alyssa Ross fixed the comparison of CBFS names that contain spaces [ ], Sergei Trofimovich fixed whitespace for compatibility with version 21.12 of the Black source code reformatter [ ] and Zbigniew Jędrzejewski-Szmek fixed JSON detection with a new version of file [ ].

Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Frédéric Pierret (fepitre):
    • Add Debian bookworm to package set creation. [ ]
  • Holger Levsen:
    • Install the po4a package where appropriate, as it is needed for the Reproducible Builds website job [ ]. In addition, also run the i18n.sh and contributors.sh scripts [ ].
    • Correct some grammar in Debian live image build output. [ ]
    • Shell monitor improvements:
      • Only show the offline node section if there are offline nodes. [ ]
      • Colorise offline nodes. [ ]
      • Shrink screen usage. [ ][ ][ ]
    • Node health check improvements:
      • Detect if live package builds encounter incomplete snapshots. [ ][ ][ ]
      • Detect if a host is running with today's date (when it should be set artificially in the future). [ ]
    • Use the devscripts package from bullseye-backports on Debian nodes. [ ]
    • Use the Munin monitoring package from bullseye-backports on Debian nodes too. [ ]
    • Update New Year handling, needed to be able to detect real and fake dates. [ ][ ]
    • Improve the error message of the script that powercycles the arm64 architecture nodes hosted by Codethink. [ ]
  • Mattia Rizzolo:
    • Use the new --timeout option added in diffoscope version 202. [ ]
  • Roland Clobus:
    • Update the build scripts now that the hooks for live builds are maintained upstream in the live-build repository. [ ]
    • Show info lines in Jenkins when reproducible hooks have been active. [ ]
    • Use unique folders for the artifacts from each live Debian version. [ ]
  • Vagrant Cascadian:
    • Switch the Debian armhf architecture nodes to use new proxy. [ ]
    • Misc. node maintenance. [ ].

Upstream patches

The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In January, we wrote a large number of such patches, including:

And finally

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:

4 February 2022

Ian Jackson: EUDCC QR codes vs NHS Travel barcodes vs TAC Verify

The EU Digital Covid Certificate scheme is a format for (digitally signed) vaccination status certificates. Not only EU countries participate - the UK is now a participant in this scheme. I am currently on my way to go skiing in the French Alps, so I needed a certificate that would be accepted in France. AFAICT the official way to do this is to get the international certificate from the NHS, and take it to a French pharmacy who will convert it into something suitably French. (AIUI the NHS international barcode is the same regardless of whether you get it via the NHS website, the NHS app, or a paper letter. NB that there is one barcode per vaccine dose so you have to get the right one - probably that means your booster, since there's a 9 month rule!) I read on a forum somewhere that you could use the French TousAntiCovid app to convert the barcode, so I thought I would try that. TousAntiCovid is Free Software and on F-Droid, so I was happy to install and use it for this. I also used the French TAC Verify app to check to see what barcodes were accepted. (I found an official document addressed to French professionals recommending this as an option for verifying the status of visitors to one's establishment.) Unfortunately this involves a googlified phone, but one could use a burner phone or ask a friend who's bitten that bullet already. I discovered that, indeed, TAC Verify accepted the TousAntiCovid barcode but rejected the NHS one. This made me curious. I used a QR code reader to decode both barcodes. The decodings were identical! A long string of guff starting HC1:. AIUI it is an encoded JWT. But there was a difference in the framing: Binary Eye reported that the NHS barcode used error correction level M (medium, aka 15%). The TousAntiCovid barcode used level L (low, 7%). I had my QR code software regenerate a QR code at level M for the data from the TousAntiCovid code. The result was a QR code which is identical (pixel-wise) to the one from the NHS. So the only difference is the error correction level. Curiously, both L (low, generated by TousAntiCovid, accepted by TAC Verify) and M (medium, generated by NHS, rejected by TAC Verify) are lower than the Q (25%) recommended by what I think is the specification. This is all very odd. But the upshot is that I think you can convert the NHS international barcode into something that should work in France simply by passing it through any QR code software to re-encode it at error correction level L (7%). But if you're happy to use the TousAntiCovid app it's probably a good way to store them. I guess I'll find out when I get to France if the converted NHS barcodes work in real establishments. Thanks to the folks behind sanipasse.fr for publishing some helpful background info and operating a Free Software backed public verification service.

Footnote

To compare the QR codes pixelwise, I roughly cropped the NHS PDF image using a GUI tool, and then on each of the two images used pnmcrop (to trim the border), pnmscale (to rescale the one-pixel-per-pixel output from Binary Eye) and pnmarith -difference to compare them (producing a pretty squirgly image showing just the pixel edges due to antialiasing).
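
(For what it's worth, that re-encoding step is only a few lines with the Python qrcode library; the HC1: string below is a placeholder for whatever your QR reader decoded from the NHS barcode.)

# Sketch: re-encode an already-decoded EUDCC payload at error correction
# level L (7%). The payload string is a placeholder for the real HC1: data.
import qrcode

payload = "HC1:..."  # whatever your QR reader decoded from the NHS barcode

qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_L)
qr.add_data(payload)
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("eudcc-level-L.png")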


19 November 2021

Evgeni Golov: A String is not a String, and that's Groovy!

Halloween is over, but I still have some nightmares to share with you, so sit down, take some hot chocolate and enjoy :) When working with Jenkins, there is almost no way to avoid writing Groovy. Well, unless you only do old style jobs with shell scripts, but y'all know what I think about shell scripts. Anyways, Eric has been rewriting the jobs responsible for building Debian packages for Foreman to pipelines (and thus Groovy). Our build process for pull requests is rather simple:
  1. Setup sources - get the orig tarball and adjust changelog to have an unique version for pull requests
  2. Call pbuilder
  3. Upload the built package to a staging archive for testing
For merges, it's identical, minus the changelog adjustment. And if there are multiple packages changed in one go, it runs each step in parallel for each package. Now I've been doing mass changes to our plugin packages, to move them to a shared postinst helper instead of having the same code over and over in every package. This required changes to many packages and sometimes I'd end up building multiple at once. That should be fine, right? Well, yeah, it did build fine, but the upload only happened for the last package. This felt super weird, especially as I was absolutely sure we did test this scenario (multiple packages in one PR) and it worked just fine. So I went on a ride through the internals of the job, trying to understand why it didn't work. This requires a tad more information about the way we handle packages for Foreman:
  • the archive is handled by freight
  • it has suites like buster, focal and plugins (that one is a tad special)
  • each suite has components that match Foreman releases, so 2.5, 3.0, 3.1, nightly etc
  • core packages (Foreman etc) are built for all supported distributions (right now: buster and focal)
  • plugin packages are built only once and can be used on every distribution
As generating the package index isn't exactly fast in freight, we tried to not run it too often. The idea was that when we build two packages for the same target (suite/version combination), we upload both at once and run import only once for both. That means that when we build Foreman for buster and focal, this results in two parallel builds and then two parallel uploads (as they end up in different suites). But if we build Foreman and Foreman Installer, we have four parallel builds, but only two parallel uploads, as we can batch upload Foreman and Installer per suite. Well, or so was the theory. The Groovy code that was supposed to do this looked roughly like this:
def packages_to_build = find_changed_packages()
def repos = [:]
packages_to_build.each { pkg ->
    suite = 'buster'
    component = '3.0'
    target = "${suite}-${component}"
    if (!repos.containsKey(target)) {
        repos[target] = []
    }
    repos[target].add(pkg)
}
do_the_build(packages_to_build)
do_the_upload(repos)
That's pretty straightforward, no? We create an empty Map, loop over a list of packages and add them to an entry in the map, which we pre-create as empty if it doesn't exist. Well, no, the resulting map always ended up with only one element in each target list. And this is also why our original tests always worked: we tested with a PR containing changes to Foreman and a plugin, and plugins go to this special target we have. So I started playing with the code (https://groovyide.com/playground is really great for that!), trying to understand why the heck it erases previous data. The first finding was that it just always ended up jumping into the "if map entry not found" branch, even though the map very clearly had the correct entry after the first package was added. The second one was weird. I was trying to minimize the reproducer code (IMHO always a good idea) and switched target = "${suite}-${component}" to target = "lol". Two entries in the list, only one jump into the "map entry not found" branch. What?! So this is clearly related to the fact that we're using String interpolation here. But hey, that's a totally normal thing to do, isn't it?! Admittedly, at this point, I was lost. I knew what breaks, but not why. Luckily, I knew exactly who to ask: Jens. After a brief "well, that's interesting", Jens quickly found the source of our griefs: double-quoted strings are plain java.lang.String if there's no interpolated expression, but are groovy.lang.GString instances if interpolation is present. And when we do repos[target] the GString target gets converted to a String, but when we use repos.containsKey() it remains a GString. This is because GStrings get converted to Strings if the method wants one, but containsKey takes any Object, while the repos[target] notation for some reason converts it. Maybe this is because using GString as Map keys should be avoided. We can reproduce this with simpler code:
def map = [:]
def something = "something"
def key = "${something}"
map[key] = 1
println key.getClass()
map.keySet().each { println it.getClass() }
map.keySet().each { println it.equals(key) }
map.keySet().each { println it.equals(key as String) }
Which results in the following output:
class org.codehaus.groovy.runtime.GStringImpl
class java.lang.String
false
true
With that knowledge, the fix was to just use the same repos[target] notation also for checking for existence: Groovy helpfully returns null, which is false-y, when it can't find an entry in a Map. So yeah, a String is not always a String, and it'll bite you!

9 November 2021

Benjamin Mako Hill: The Hidden Costs of Requiring Accounts

Should online communities require people to create accounts before participating? This question has been a source of disagreement among people who start or manage online communities for decades. Requiring accounts makes some sense since users contributing without accounts are a common source of vandalism, harassment, and low quality content. In theory, creating an account can deter these kinds of attacks while still making it pretty quick and easy for newcomers to join. Also, an account requirement seems unlikely to affect contributors who already have accounts and are typically the source of most valuable contributions. Creating accounts might even help community members build deeper relationships and commitments to the group in ways that lead them to stick around longer and contribute more.
In a new paper published in Communication Research, I worked with Aaron Shaw to provide an answer. We analyze data from natural experiments that occurred when 136 wikis on Fandom.com started requiring user accounts. Although we find strong evidence that the account requirements deterred low quality contributions, this came at a substantial (and usually hidden) cost: a much larger decrease in high quality contributions. Surprisingly, the cost includes lost contributions from community members who had accounts already, but whose activity appears to have been catalyzed by the (often low quality) contributions from those without accounts.
A version of this post was first posted on the Community Data Science blog. The full citation for the paper is: Hill, Benjamin Mako, and Aaron Shaw. 2020. "The Hidden Costs of Requiring Accounts: Quasi-Experimental Evidence from Peer Production." Communication Research 48 (6): 771-795. https://doi.org/10.1177/0093650220910345. If you do not have access to the paywalled journal, please check out this pre-print or get in touch with us. We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.

13 October 2021

Dirk Eddelbuettel: GitHub Streak: Round Eight

Seven years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:
This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.
and then showed the first chart of GitHub streaking 366 days:
github activity october 2013 to october 2014
And six years ago a first follow-up appeared in this post about 731 days:
github activity october 2014 to october 2015
And five years ago we had a follow-up at 1096 days:
github activity october 2015 to october 2016
And four years ago we had another one marking 1461 days:
github activity october 2016 to october 2017
And three years ago another one for 1826 days:
github activity october 2017 to october 2018
And two years ago another one bringing it to 2191 days:
github activity october 2018 to october 2019
And last year another one bringing it to 2557 days:
github activity october 2019 to october 2020
And as today is October 12, here is the newest one from 2020 to 2021 with a new total of 2922 days:
github activity october 2020 to october 2021
Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 September 2021

John Goerzen: Facebook s Blocking Decisions Are Deliberate Including Their Censorship of Mastodon

In the aftermath of my report of Facebook censoring mentions of the open-source social network Mastodon, there was a lot of conversation about whether or not this was deliberate. That conversation seemed to focus on whether a human specifically added joinmastodon.org to some sort of blacklist. But that's not even relevant. OF COURSE it was deliberate, because of how Facebook tunes its algorithm. Facebook's algorithm is tuned for Facebook's profit. That means it's tuned to maximize the time people spend on the site: engagement. In other words, it is tuned to keep your attention on Facebook. Why do you think there is so much junk on Facebook? So much anti-vax, anti-science, conspiracy nonsense from the likes of Breitbart? It's not because their algorithm is incapable of surfacing the good content; we already know it can, because they temporarily pivoted it shortly after the last US election. They intentionally undid its efforts to make high-quality news sources more prominent, twice. Facebook has said that certain anti-vax disinformation posts violate its policies. It has an extremely cumbersome way to report them, but it can be done and I have. These reports are met with either silence or a response claiming the content didn't violate their guidelines. So what algorithm is it that allows Breitbart to not just be seen but to thrive on the platform, lets anti-vax disinformation survive even a human review, while banning mentions of Mastodon? One that is working exactly as intended. We may think this algorithm is busted. Clearly, Facebook does not. If their goal is to maximize profit by maximizing engagement, the algorithm is working exactly as designed. I don't know if joinmastodon.org was specifically blacklisted by a human. Nor is it relevant. Facebook's choice to tolerate and promote the things that service its greed for engagement and money, even if they are the lowest dregs of the web, is deliberate. It is no accident that Breitbart does better than Mastodon on Facebook. After all, which of these does its algorithm detect keeps people engaged on Facebook itself more?

Facebook removes the ban

You can see all the screenshots of the censorship in my original post. Now, Facebook has reversed course. We also don't know if this reversal was human or algorithmic, but that still is beside the point. The point is, Facebook intentionally chooses to surface and promote those things that drive engagement, regardless of quality. Clearly many have wondered if tens of thousands of people have died unnecessary deaths over COVID as a result. One whistleblower says "I have blood on my hands" and President Biden said they're "killing people" before walking back his comments slightly. I'm not equipped to verify those statements. But what do they think is going to happen if they prioritize engagement over quality? Rainbows and happiness?

11 September 2021

John Goerzen: Facebook Is Censoring People For Mentioning Open-Source Social Network Mastodon

Update: Facebook has reversed itself over this censorship, but I maintain that whether the censorship was algorithmic or human, it was intentional either way. Details in my new post. Last November, I made a brief post to Facebook about Mastodon. Mastodon is an open-source and open social network, which is decentralized and all about user control instead of corporate control. I've blogged about Mastodon and the dangers of Facebook before, but rarely mentioned Mastodon on Facebook itself. Today, I received this notice that Facebook had censored my post about Mastodon. Wonder with me for a second what this one-off post I composed myself might have done to trip Facebook's "filter", and it is probably obvious that what tripped the filter was the mention of an open source competitor, even though Facebook is much more enormous than Mastodon. I have been a member of Facebook for many years, and this is the one and only time anything like that has happened. Why they decided today to take down that post I have no idea. In case you wondered about their sincerity towards stamping out misinformation (which, on the rare occasions they do something about it, they deprioritize rather than remove, as they did here), this probably answers your question. Or, are they sincere about thinking they're such a force for good by connecting the world's people? Well, only so long as the world's people don't say nice things about alternatives to Facebook, I guess. Well, you might be wondering, why not appeal, since they obviously made a mistake? Because, of course, you can't: indeed I did tick a box that said I disagreed, but there was no place to ask why or to question their action. So what would cause a non-controversial post from a long-time Facebook member, who has never had anything like this happen, to disappear? Greed. Also fear. Maybe I'd feel sorry for them if they weren't acting like a bully. Edit: There are reports from several others on Mastodon of the same happening this week. I am trying to gather more information. It sounds like it may be happening on Twitter as well. Edit 2: And here are some other reports from both Facebook and Twitter. Definitely not just me. Edit 3: While trying to reply to someone on Facebook who was trying to defend Facebook, I mentioned joinmastodon.org and got this: Anyone else seeing it? Edit 4: It is far more than just me, clearly. More reports are out there; for instance, this one and that one.

10 September 2021

Enrico Zini: A nightmare of confcalls and microphones

I had this nightmare where I had a very, very important confcall. I joined with Chrome. Chrome said "Failed to access your microphone - Cannot use microphone for an unknown reason. Could not start audio source." I joined with Firefox. Firefox chose "Monitor of Built-in Audio Analog Stereo" as a microphone, and did not let me change it. Not in the browser, not in pavucontrol. I joined with the browser on my phone, and the webpage said "This meeting needs to use your microphone and camera. Select *Allow* when your browser asks for permissions." But the question never came. I could hear people talking. I had very important things to say. I tried typing them in the chat window, but they weren't seeing it. The meeting ended. I was on the verge of tears.
Tell me, Mr. Anderson, what good is a phone call when you are unable to speak?
Since this nightmare happened for real, including the bit about tears in the end, let's see that it doesn't happen again. I should now have three working systems, which hopefully won't all break again all at the same time.

Fixing Chrome

I can reproduce this reliably, on Bullseye's standard Chromium 90.0.4430.212-1, just launched on an empty profile, no extensions. The webpage has camera and microphone allowed. Chrome doesn't show up in the recording tab of pulseaudio. Nothing on Chrome's stdout/stderr. JavaScript console has:
Logger.js:154 2021-09-10Txx:xx:xx.xxxZ [features/base/tracks] Failed to create local tracks
Array(2)
DOMException: Could not start audio source
I found the answer here:
I had the similar problem once with chromium. i could solve it by switching in preferences->microphone-> from "default" to "intern analog stereo".
Opening the little popup next to the microphone/mute button allows choosing other microphones, which work. Only "Same as system (Default)" does not work.

Fixing Firefox

I have firefox-esr 78.13.0esr-1~deb11u1. In Jitsi, microphone selection is disabled on the toolbar and in the settings menu. In pavucontrol, changing the recording device for Firefox has no effect. If for some reason the wrong microphone got chosen, those are not ways of fixing it. What I found works is to click on the camera permission icon, remove microphone permission, then reload the page. At that point Firefox will ask for permission again, and that microphone selection seems to work. Relevant bugs: on Jitsi and on Firefox. Since this is well known (once you find the relevant issues), I'd have appreciated Jitsi at least showing a link to an explanation of workarounds on Firefox, instead of just disabling microphone selection.

Fixing Jitsi on the phone side

I really don't want to preemptively give camera and microphone permissions to my phone browser. I noticed that there's the Jitsi app on F-Droid and much as I hate to use an app when a website would work, at least in this case it's a way to keep the permission sets separate, so I installed that.

Fixing pavucontrol?

I tried to find out why I can't change input device for Firefox on pavucontrol. I only managed to find an Ask Ubuntu question with no answer and a Unix StackExchange question with no answer.

9 August 2021

Russ Allbery: Review: The Last Battle

Review: The Last Battle, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #7
Publisher: Collier Books
Copyright: 1956
Printing: 1978
ISBN: 0-02-044210-6
Format: Mass market
Pages: 184
The Last Battle is the seventh and final book of the Chronicles of Narnia in every reading order. It ties together (and spoils) every previous Narnia book, so you do indeed want to read it last (or skip it entirely, but I'll get into that). In the far west of Narnia, beyond the Lantern Waste and near the great waterfall that marks Narnia's western boundary, live a talking ape named Shift and a talking donkey named Puzzle. Shift is a narcissistic asshole who has been gaslighting and manipulating Puzzle for years, convincing the poor donkey that he's stupid and useless for anything other than being Shift's servant. At the start of the book, a lion skin washes over the waterfall and into the Cauldron Pool. Shift, seeing a great opportunity, convinces Puzzle to retrieve it. The king of Narnia at this time is Tirian. I would tell you more about Tirian except, despite being the protagonist, that's about all the characterization he gets. He's the king, he's broad-shouldered and strong, he behaves in a correct kingly fashion by preferring hunting lodges and simple camps to the capital at Cair Paravel, and his close companion is a unicorn named Jewel. Other than that, he's another character like Rilian from The Silver Chair who feels like he was taken from a medieval Arthurian story. (Thankfully, unlike Rilian, he doesn't talk like he's in a medieval Arthurian story.) Tirian finds out about Shift's scheme when a dryad appears at Tirian's camp, calling for justice for the trees of Lantern Waste who are being felled. Tirian rushes to investigate and stop this monstrous act, only to find the beasts of Narnia cutting down trees and hauling them away for Calormene overseers. When challenged on why they would do such a thing, they reply that it's at Aslan's orders. The Last Battle is largely the reason why I decided to do this re-read and review series. It is, let me be clear, a bad book. The plot is absurd, insulting to the characters, and in places actively offensive. It is also, unlike the rest of the Narnia series, dark and depressing for nearly all of the book. The theology suffers from problems faced by modern literature that tries to use the Book of Revelation and related Christian mythology as a basis. And it is, most famously, the site of one of the most notorious authorial betrayals of a character in fiction. And yet, The Last Battle, probably more than any other single book, taught me to be a better human being. It contains two very specific pieces of theology that I would now critique in multiple ways but which were exactly the pieces of theology that I needed to hear when I first understood them. This book steered me away from a closed, judgmental, and condemnatory mindset at exactly the age when I needed something to do that. For that, I will always have a warm spot in my heart for it. I'm going to start with the bad parts, though, because that's how the book starts. MAJOR SPOILERS BELOW. First, and most seriously, this is a second-order idiot plot. Shift shows up with a donkey wearing a lion skin (badly), only lets anyone see him via firelight, claims he's Aslan, and starts ordering the talking animals of Narnia to completely betray their laws and moral principles and reverse every long-standing political position of the country... and everyone just nods and goes along with this. This is the most blatant example of a long-standing problem in this series: Lewis does not respect his animal characters. 
They are the best feature of his world, and he treats them as barely more intelligent than their non-speaking equivalents and in need of humans to tell them what to do. Furthermore, despite the assertion of the narrator, Shift is not even close to clever. His deception has all the subtlety of a five-year-old who doesn't want to go to bed, and he offers the Narnians absolutely nothing in exchange for betraying their principles. I can forgive Puzzle for going along with the scheme since Puzzle has been so emotionally abused that he doesn't know what else to do, but no one else has any excuse, especially Shift's neighbors. Given his behavior in the book, everyone within a ten mile radius would be so sick of his whining, bullying, and lying within a month that they'd never believe anything he said again. Rishda and Ginger, a Calormene captain and a sociopathic cat who later take over Shift's scheme, do qualify as clever, but there's no realistic way Shift's plot would have gotten far enough for them to get involved. The things that Shift gets the Narnians to do are awful. This is by far the most depressing book in the series, even more than the worst parts of The Silver Chair. I'm sure I'm not the only one who struggled to read through the first part of this book, and raced through it on re-reads because everything is so hard to watch. The destruction is wanton and purposeless, and the frequent warnings from both characters and narration that these are the last days of Narnia add to the despair. Lewis takes all the beautiful things that he built over six books and smashes them before your eyes. It's a lot to take, given that previous books would have treated the felling of a single tree as an unspeakable catastrophe. I think some of these problems are due to the difficulty of using Christian eschatology in a modern novel. An antichrist is obligatory, but the animals of Narnia have no reason to follow an antichrist given their direct experience with Aslan, particularly not the aloof one that Shift tries to give them. Lewis forces the plot by making everyone act stupidly and out of character. Similarly, Christian eschatology says everything must become as awful as possible right before the return of Christ, hence the difficult-to-read sections of Narnia's destruction, but there's no in-book reason for the Narnians' complicity in that destruction. One can argue about whether this is good theology, but it's certainly bad storytelling. I can see the outlines of the moral points Lewis is trying to make about greed and rapacity, abuse of the natural world, dubious alliances, cynicism, and ill-chosen prophets, but because there is no explicable reason for Tirian's quiet kingdom to suddenly turn to murderous resource exploitation, none of those moral points land with any force. The best moral apocalypse shows the reader how, were they living through it, they would be complicit in the devastation as well. Lewis does none of that work, so the reader is just left angry and confused. The book also has several smaller poor authorial choices, such as the blackface incident. Tirian, Jill, and Eustace need to infiltrate Shift's camp, and use blackface to disguise themselves as Calormenes. That alone uncomfortably reveals how much skin tone determines nationality in this world, but Lewis makes it far worse by having Tirian comment that he "feel[s] a true man again" after removing the blackface and switching to Narnian clothes. 
All of this drags on and on, unlike Lewis's normally tighter pacing, to the point that I remembered this book being twice the length of any other Narnia book. It's not; it's about the same length as the rest, but it's such a grind that it feels interminable. The sum total of the bright points of the first two-thirds of the book is the arrival of Jill and Eustace, Jill's one moment of true heroism, and the loyalty of a single Dwarf. The rest is all horror and betrayal and doomed battles and abject stupidity.

I do, though, have to describe Jill's moment of glory, since I complained about her and Eustace throughout The Silver Chair. Eustace is still useless, but Jill learned forestcraft during her previous adventures (not that we saw much sign of this previously) and slips through the forest like a ghost to steal Puzzle and his lion costume out from under the nose of the villains. Even better, she finds Puzzle and the lion costume hilarious, which is the one moment in the book where one of the characters seems to understand how absurd and ridiculous this all is. I loved Jill so much in that moment that it makes up for all of the pointless bickering of The Silver Chair. She doesn't get to do much else in this book, but I wish the Jill who shows up in The Last Battle had gotten her own book.

The end of this book, and the only reason why it's worth reading, happens once the heroes are forced into the stable that Shift and his co-conspirators have been using as the stage for their fake Aslan. Its door (for no well-explained reason) has become a door to Aslan's Country and leads to a reunion with all the protagonists of the series. It also becomes the frame of Aslan's final destruction of Narnia and judging of its inhabitants, which I suspect would be confusing if you didn't already know something about Christian eschatology. But before that, this happens, which is sufficiently and deservedly notorious that I think it needs to be quoted in full.
"Sir," said Tirian, when he had greeted all these. "If I have read the chronicle aright, there should be another. Has not your Majesty two sisters? Where is Queen Susan?" "My sister Susan," answered Peter shortly and gravely, "is no longer a friend of Narnia." "Yes," said Eustace, "and whenever you've tried to get her to come and talk about Narnia or do anything about Narnia, she says 'What wonderful memories you have! Fancy your still thinking about all those funny games we used to play when we were children.'" "Oh Susan!" said Jill. "She's interested in nothing nowadays except nylons and lipstick and invitations. She always was a jolly sight too keen on being grown-up." "Grown-up indeed," said the Lady Polly. "I wish she would grow up. She wasted all her school time wanting to be the age she is now, and she'll waste all the rest of her life trying to stay that age. Her whole idea is to race on to the silliest time of one's life as quick as she can and then stop there as long as she can."
There are so many obvious and dire problems with this passage, and so many others have written about it at length, that I will only add a few points. First, I find it interesting that neither Lucy nor Edmund says a thing. (I would like to think that Edmund knows better.) The real criticism comes from three characters who never interacted with Susan in the series: the two characters introduced after she was no longer allowed to return to Narnia, and a character from the story that predated hers. (And Eustace certainly has some gall to criticize someone else for treating Narnia as a childish game.) It also doesn't say anything good about Lewis that he puts his rather sexist attack on Susan into the mouths of two other female characters. Polly's criticism is a somewhat generic attack on puberty that could arguably apply to either sex (although "silliness" is usually reserved for women), but Jill makes the attack explicitly gendered. It's the attack of a girl who wants to be one of the boys on a girl who embraces things that are coded feminine, and there's a whole lot of politics around the construction of gender happening here that Lewis is blindly reinforcing and not grappling with at all. Plus, this is only barely supported by single sentences in The Voyage of the Dawn Treader and The Horse and His Boy and directly contradicts the earlier books. We're expected to believe that Susan the archer, the best swimmer, the most sensible and thoughtful of the four kids has abruptly changed her whole personality. Lewis could have made me believe Susan had soured on Narnia after the attempted kidnapping (and, although left unstated, presumably eventual attempted rape) in The Horse and His Boy, if one ignores the fact that incident supposedly happens before Prince Caspian where there is no sign of such a reaction. But not for those reasons, and not in that way. Thankfully, after this, the book gets better, starting with the Dwarfs, which is one of the two passages that had a profound influence on me. Except for one Dwarf who allied with Tirian, the Dwarfs reacted to the exposure of Shift's lies by disbelieving both Tirian and Shift, calling a pox on both their houses, and deciding to make their own side. During the last fight in front of the stable, they started killing whichever side looked like they were winning. (Although this is horrific in the story, I think this is accurate social commentary on a certain type of cynicism, even if I suspect Lewis may have been aiming it at atheists.) Eventually, they're thrown through the stable door by the Calormenes. However, rather than seeing the land of beauty and plenty that everyone else sees, they are firmly convinced they're in a dark, musty stable surrounded by refuse and dirty straw. This is, quite explicitly, not something imposed on them. Lucy rebukes Eustace for wishing Tash had killed them, and tries to make friends with them. Aslan tries to show them how wrong their perceptions are, to no avail. Their unwillingness to admit they were wrong is so strong that they make themselves believe that everything is worse than it actually is.
"You see," said Aslan. "They will not let us help them. They have chosen cunning instead of belief. Their prison is only in their own minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out."
I grew up with the US evangelical version of Hell as a place of eternal torment, which in turn was used to justify religious atrocities in the name of saving people from Hell. But there is no Hell of that type in this book. There is a shadow into which many evil characters simply disappear, and there's this passage. Reading this was the first time I understood the alternative idea of Hell as the absence of God instead of active divine punishment. Lewis doesn't use the word "Hell," but it's obvious from context that the Dwarfs are in Hell. But it's not something Aslan does to them and no one wants them there; they could leave any time they wanted, but they're too unwilling to be wrong. You may have to be raised in conservative Christianity to understand how profoundly this rethinking of Hell (which Lewis tackles at greater length in The Great Divorce) undermines the system of guilt and fear that's used as motivation and control. It took me several re-readings and a lot of thinking about this passage, but this is where I stopped believing in a vengeful God who will eternally torture nonbelievers, and thus stopped believing in all of the other theology that goes with it. The second passage that changed me is Emeth's story. Emeth is a devout Calormene, a follower of Tash, who volunteered to enter the stable when Shift and his co-conspirators were claiming Aslan/Tash was inside. Some time after going through, he encounters Aslan, and this is part of his telling of that story (and yes, Lewis still has Calormenes telling stories as if they were British translators of the Arabian Nights):
[...] Lord, is it then true, as the Ape said, that thou and Tash are one? The Lion growled so that the earth shook (but his wrath was not against me) and said, It is false. Not because he and I are one, but because we are opposites, I take to me the services which thou hast done to him. For I and he are of such different kinds that no service which is vile can be done to me, and none which is not vile can be done to him. Therefore if any man swear by Tash and keep his oath for the oath's sake, it is by me that he has truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted. Dost thou understand, Child? I said, Lord, thou knowest how much I understand. But I said also (for the truth constrained me), Yet I have been seeking Tash all my days. Beloved, said the Glorious One, unless thy desire had been for me, thou wouldst not have sought so long and so truly. For all find what they truly seek.
So, first, don't ever say this to anyone. It's horribly condescending and, since it's normally said by white Christians to other people, usually explicitly colonialist. Telling someone that their god is evil but since they seem to be a good person they're truly worshiping your god is only barely better than saying yours is the only true religion. But it is better, and as someone who, at the time, was wholly steeped in the belief that only Christians were saved and every follower of another religion was following Satan and was damned to Hell, this passage blew my mind. This was the first place I encountered the idea that someone who followed a different religion could be saved, or that God could transcend religion, and it came with exactly the context and justification that I needed given how close-minded I was at the time. Today, I would say that the Christian side of this analysis needs far more humility, and fobbing off all the evil done in the name of the Christian God by saying "oh, those people were really following Satan" is a total moral copout. But, nonetheless, Lewis opened a door for me that I was able to step through and move beyond to a less judgmental, dismissive, and hostile view of others.

There's not much else in the book after this. It's mostly Lewis's charmingly Platonic view of the afterlife, in which the characters go inward and upward to truer and more complete versions of both Narnia and England and are reunited (very briefly) with every character of the series. Lewis knows not to try too hard to describe the indescribable, but it remains one of my favorite visions of an afterlife because it makes so explicit that this world is neither static nor the last, but only the beginning of a new adventure. This final section of The Last Battle is deeply flawed, rather arrogant, a little bizarre, and involves more lectures on theology than precise description, but I still love it. By itself, it's not a bad ending for the series, although I don't think it has half the beauty or wonder of the end of The Voyage of the Dawn Treader. It's a shame about the rest of the book, and it's a worse shame that Lewis chose to sacrifice Susan on the altar of his prejudices. Those problems made it very hard to read this book again and make it impossible to recommend.

Thankfully, you can read the series without it, and perhaps most readers would be better off imagining their own ending (or lack of ending) to Narnia than the one Lewis chose to give it. But the one redeeming quality The Last Battle will always have for me is that, despite all of its flaws, it was exactly the book that I needed to read when I read it.

Rating: 4 out of 10

13 July 2021

Debian XMPP Team: XMPP Novelties in Debian 11 Bullseye

This is not only the Year of the Ox, but also the year of Debian 11, code-named bullseye. The release lies ahead, and the full freeze starts this week. A good opportunity to take a look at what is new in bullseye. In this post, new programs and new software versions related to XMPP, also known as Jabber, are presented. XMPP has existed since 1999 and has a diverse and active developer community. It is a universal communication protocol, used for instant messaging, IoT, WebRTC, and social applications. You will probably encounter some oxen in this post. That's all for now. Enjoy Debian 11 bullseye and Happy Chatting!

28 June 2021

Shirish Agarwal: Indian Capital Markets, BSE, NSE

I had been meaning to write on the above topic for almost a couple of months now but just kept procrastinating about it. The push came to shove when Sucheta Dalal and Debasis Basu shared their understanding, wisdom, and all in the new book called "Absolute Power: Inside story of the National Stock Exchange's amazing success, leading to hubris, regulatory capture and algo scam". Now, I won't go into the details of the new book, as I have not yet bought it; and even if I had bought it and shared some of its revelations, it wouldn't do justice to either the book or the reader without first knowing some of the background behind it.

Before I jump ahead, I would suggest people read my sort-of introductory blog post on banking history so they know where I'm coming from. I'm going to deviate a bit from banking, as this is about trade and capital markets, although banking will come in later on. And I will also be sharing some cultural insights along with history so people are aware of why things happened the way they did.

Calicut, Calcutta, Kolkata: one-time major depot around the world

Now, one cannot start any topic about trade without talking about Kolkata. While today it seems like a bastion of communism, at one time it was one of the major trade depots around the world. Both William Dalrymple and the Chinese have mentioned Kolkata many times as being one of the major centers of trade. This was between the 13th and the late 19th century. A cursory look throws up this article, which talks about Kolkata, or Calicut as it was known, as a major trade depot. There are of course many, many articles and even books which tell how Kolkata was a major trade depot. Now between the 13th and 19th century, a lot of changes happened which made Kolkata poorer and shifted trade to Mumbai/Bombay, which in those times was nothing but a port city like many others.

The Rise of the Zamindar

Around the 15th century, when Babur invaded Hindustan, he realized that Hindustan was too big a country to be governed alone. And Hindustan was much broader than independent India is today. So he created the title of Zamindars. Interestingly, if you look at the Mughal period, they were much more in tune with Hindustani practices than the British who came later. They used the caste divisions and hierarchy wisely, making sure that the status quo was maintained as far as castes/creeds were concerned. While in-fighting with various rulers continued, it was more or less about land and power rather than anything else. When the Britishers came, they co-opted the same arrangement with a minor adjustment: while under the earlier system the zamindars didn't have the power to be landowners, the Britishers gave them land ownership. A huge percentage of these zamindars, especially in Bengal, were from my own caste, Banias or Baniyas. The problem and the solution for the Britishers had been this: it was a large land to control and exploit, and the number of British officers and nobles was very small. So they gave virtually a lot of powers to the Banias. The only thing the British insisted on was very high rents from the newly minted Zamindars. The Zamindars in turn used the powers of personal fiefdom to give loans at very high interest rates; when the poor were unable to pay the interest, they would take the land, while at the same time slavery was forced on both men and women, many a time with rapes and affairs. While there have been many records shedding light on this, I don't think it could be conveyed any more powerfully than as enacted and shared by Shabana Azmi in Ankur: the Seedling. Another prominent grouping formed around the same time was the Bhadralok. Now, as shared, the Bhadralok, while having all the amenities of belonging to the community, turned a blind eye to the excesses being done by the Zamindars. How much they played a hand in the decimation of Bengal has been a matter of debate, but they did have a hand, that much is not contested.

The Rise of Stock Exchanges

Sadly and interestingly, many people believe, and continue to believe, that stock exchanges are a recent phenomenon. The first stock exchange, though, was the Calcutta Stock Exchange rather than the Bombay Stock Exchange. How valuable Calcutta was to the Britishers in those early years can be gauged from the fact that it was made the capital of India in 1772. In fact, after the Grand Trunk Road (after which trains have been named in both countries), any number of books have been written on the trade between Calcutta and Peshawar (now in Pakistan). And it was not just limited to trade but also cultural give-and-take between the two centers. Even today, if you look at YT (YouTube) and look up some interviews of old people, you find many interesting anecdotes of people sharing both culture and trade.

The problem of the 60s and the rise of BSE
After India became independent and the Constitutional debates happened, the new elites understood that there could not be two power centers governing India. On one hand were the politicians who had come to power on the back of the popular vote; on the other were the Zamindars, who more often than not had abused their powers, which resulted in widespread poverty. The Britishers are to blame, but so are the middlemen, as they became willing enablers of the same system of oppression. Hence, you had the 1951 amendment to the Constitution and the 1956 Zamindari Abolition Act. In fact, you can find a much more in-depth article about both the Zamindars and their final abolition here. Now once Zamindari was gone, there was nothing to replace it with. The Zamindars, ousted from their old roles, turned around and tried to become industrialists. The problem was that the poor and the downtrodden had already had their experiences with the Zamindars. Also, some industrialists from the North and West came to Bengal, but they had no understanding of either the language or the culture, or of what had happened in Bengal. And notice that I have not talked about the famines and the floods that have wrecked Bengal since time immemorial, some of which got etched on the soul of Bengal and have left marks even today. The psyche of the Bengali and the Bhadralok has gone through enormous shifts. I have met quite a few and do see the guilt they feel. If one wonders how socialist parties are able to hold power in Bengal, look no further than Tarikh, which tells and shares with you how many Bengalis even today still feel somewhat lost.

The Rise of BSE

Now, the Kolkata Stock Exchange had been going down, for multiple reasons other than those listed above. From the 1950s onwards Jawaharlal Nehru had this idea of 5-year plans, borrowed from socialist countries such as Russia, China etc. His vision and ambition for the newly minted Indian state were huge, while at the same time he understood we were poor. There was the loot by the East India Company and the Britishers, and on top of that the division of wealth with Pakistan, even though the majority of Muslims chose to remain with India. Travel on Indian Railways was a risky affair. My grandfather shared numerous tales where he used to fill money in socks and put the socks on in boots when going between either Delhi and Kolkata or Pune and Kolkata. Also, as the capital became Delhi (which it unofficially had been for many years), the transparency from Kolkata-based firms became less. So many Kolkata firms were mismanaged and shut down, while Maharashtra, my own state, saw a huge boom in industrialization as well as farming. From the 1960s to the 1990s there were many booms and busts in the stock exchanges, but most were manageable.

While the 60s began on a good note, as Goa was finally freed from the Portuguese army and influence, the 1962 war with the Chinese made many a soul question where we went wrong. Jawaharlal Nehru went all over the world to ask for help but had to return home empty-handed. Bollywood showed a world of bell-bottoms and cars and whatnot, while the majority were still trying to figure out how to put two square meals on the table. India suffered one of the worst famines in those times. People had to ration food. Families made do with either one meal or just roti (flatbread) rather than rice. In Bengal, things were much more severe. There were huge milk shortages, so Bengalis were told to cut down on sweets. This enraged the Bengalis as nothing else could. Note: if one wants to read how bad Indians felt at that time, all one has to read is V.S. Naipaul's An Area of Darkness.

This was also the time when quite a few Indians took their first step out of India. While Air India had just started, the fares were prohibitive. Those who were not well off either worked on ships or went via passenger or cargo ships to Dubai/Qatar and the Middle East. Some went to Russia and some even to the States. While today's émigrés want to settle in the West forever and have their children and grandchildren grow up in the West, in the 1960s and 70s the idea was far different. The main purpose for a vast majority was to get jobs and whatnot, save maximum money, and send it back to India as a remittance. The idea was to make enough money in 3-5-10 years, come back to India, and then lead a comfortable life. Sadly, there has hardly been any academic work done in India, at least to my knowledge, to document the sacrifices made by Indians in search of jobs, life, purpose, etc. in the 1960s and 1970s.

The 1970s was also when alternative cinema started its journey, with people like Smita Patil and Naseeruddin Shah who portrayed people's struggles on-screen. Most of them didn't have commercial success because the movies and the stories were bleak. While the acting was superb, most Indians loved to be captivated by fights, car chases, and whatnot rather than the dreary existence which they had. And the alt cinema forced them to look into the mirror, which was frowned upon both by the masses and the classes. So cinema, which could have been a wake-up call for a lot of Indians, failed. One of the most notable works of that decade, at least to me, was Manthan. 1961 was also marked by the launch of the Economic Times and the Financial Express, which tells us that there was some appetite for financial news and understanding.

The 1970s was also a very turbulent time in the corporate sector and the stock exchanges. Again, the companies which were listed were run by the very well-off, and many of them had been abroad. At the same time, you had fly-by-night operators. One of the things which started in this decade is that you had corporate wars and hostile takeovers, quite a few of which could well have a Web series or two of their own. This was also a decade marked by huge labor unrest, which again changed the face of Bombay/Mumbai. From the 1950s till the 1970s, Bombay was known for its mills. So large migrant communities from all over India came to Bombay to become the next Bollywood star, and if that didn't happen, they would get jobs in the mills. Bombay/Mumbai has/had this unique feature that somehow you will make money to make ends meet. Of course, with the pandemic, even that has gone for a toss. Labor unrest was a defining character of that decade.
Three movies, Kaala Patthar, Kalyug, and Ankush, give a broad outlook of what happened in that decade. One thing which was present then and is omnipresent now is how time and time again we lost our demographic dividend. Again there was an exodus of young people who ventured out to seek fortunes elsewhere. The 1970s and 80s were also famous for the License Raj which they brought in. Just like the Soviets, there were waiting periods for everything. A telephone line meant waiting anywhere from 4 to 8 years. In 1987, when we applied and got a phone within 2-3 months, most of my relatives, both from my mother's and my father's side, could not believe we paid nothing to get a telephone line. We did pay the telephone guy INR 10/-, which was a somewhat princely sum, when he was installing it; even then they could not believe it, as in Northern India you couldn't get a phone line even if your number had come up. You had to pay anywhere from INR 500/1000 or more to get a line. This was BSNL, and to reiterate, there were no alternatives at that time.

The 1990s and the Harshad Mehta Scam

The 90s was when I was a teenager. You do all the stupid things for love, lust, whatever. That is also the time you are really introduced to the world of money. During my time, there were only three choices: Sciences, Commerce, and Arts. If History was your favorite subject then you would take Arts, and if it was not, and you were not studious, then you would end up in Commerce. This is how careers were chosen. So I enrolled in Commerce. Because my grandfather and family on my mother's side were interested in stocks, both as a saving and compounding tool, I was able to see the Pune Stock Exchange in action one day. The only thing I remember of that day is people shouting loudly, waving various chits. I had no idea those were deals of maybe thousands or even lakhs. The Pune Stock Exchange had been newly minted. I also participated in a couple of mock stock exchanges and came to understand that one has to be aggressive in order to win. You had to be really loud to be heard over others; you could not afford to be shy. Also, spread your risks. Sadly, nothing about the stock markets was in the syllabus.

1991 was also when we saw the Iraq war and the balance of payments crisis in India, and we didn't know that the Harshad Mehta scam was around the corner. Most of the scams in India have been caught because the person who was doing it was flashy. And this was the reason that even he was caught, by Ms. Sucheta Dalal, a young beat reporter from the Indian Express who had been covering the Indian stock market. Many of her articles were thought-provoking. Now, a brief understanding is required before we actually get to the scam. Because of the 1991 balance of payments crisis, the IMF rescued India on the condition that India throw its market open. In the 1980s itself, Rajeev Gandhi had wanted to partially open India's market, but both politicians and industrialists advised him not to; we were not ready. On 21st May 1991, Rajeev Gandhi was assassinated by the LTTE. A month later, due to the sympathy vote, the Narsimha Rao Govt. took power. While for most new Governments there is usually a honeymoon period lasting 6 months or so, till they get settled in their roles, before people start asking tough questions, it was not to be for this Govt. The problem had been building for a few years. Although, in many ways, our economy was better than it is today, the one thing India didn't do well at that time was managing foreign exchange. Only a few Indians had both the money and the opportunity to go abroad, and the need for electronics was limited. One of the biggest imports of the time, then and still today, is energy: oil. While today it is oil/gas and electronics, at that time it was only oil. The oil import bill was ballooning while exports were more or less stagnant and mostly comprised raw materials rather than finished products. Even today it is largely this; one of the biggest industrialists in India, Ambani, exports gas/oil while Adani exports coal. Anyway, the deficit was large enough to trigger a payment crisis, and Narsimha Rao had to throw open the Indian market almost overnight. Some changes became quickly apparent, while others took a long time to come.

Satellite Television and the Entry of Foreign Banks

Almost overnight, from 1 channel we became multi-channel. Star TV (Rupert Murdoch) brought us Bold and Beautiful, while CNN broadcast the Iraq War. It was unbelievable for us that we were getting reports of what had happened 24-48 hours earlier. Fortunately or unfortunately, I was still very much a teenager and did not understand the import of what was happening. Even in my college, except for one or two people, it wasn't a topic for debate or talk, nor was the economy. We were basically cocooned in our own little world. But this was not the case for the rest of India, and especially the banks. The entry of foreign banks was a rude shock to Indian banks. The foreign banks were bringing both technology and sophistication in their offerings, and Indian banks needed and wanted fast money to show hefty profits. Demand for credit wasn't much, at least nowhere near the level it is today. At the same time, defaults on credit were nowhere near as high as they are today. But that will require its own space and article.

To quench the banks' thirst for hefty profits, enter Harshad Mehta. At that point in time, banks were not permitted at all to invest in the securities/share market. They could only buy Government securities or bonds, which had a coupon rate of say 8-10%, nowhere near enough to satisfy the need for the hefty profits desired by Indian banks. On top of that, the cash was blocked for a long time: most of these Government bonds had anywhere between a 10-20 year maturity, and some even longer. Now, one loophole was that the banks themselves could not buy these securities directly; they had to approach a registered broker of the share market who would do these transactions on their behalf. Here is where Mr. Mehta played his game. He shared both legal and illegal ways in which both the bank and he would prosper. While banking at one time was thought to be conservative and somewhat cautious, either because they were too afraid that Western private banks would take that pie or for whatever other reasons, the banks agreed to his antics. To play the game, Harshad Mehta needed lots of cash, which the banks provided him in the guise of buying securities that were never bought, the amounts simply being transferred to his account. He actively traded stocks, at the same time formed a group, and also made the rumor mill work to his benefit. The share market is largely a reactionary market; it operates on patience, news, and the rumor mill. The effect of his shenanigans was that the price of a stock that was trading at say INR 200 reached the stratospheric height of INR 9000/- without any change in the fundamentals or outlook of the stock. His thirst didn't remain restricted to stocks but also ventured into the unglamorous world of Govt. securities, where he started trading even in them in large quantities. In order to attract new clients, he cultivated a fancy lifestyle. The fancy lifestyle was what caught the eye of Sucheta Dalal, and she started investigating the deals he was doing. Being a reporter, she had the advantage of getting many doors to open and getting information that would otherwise be under lock and key. On 23rd April 1992, Sucheta Dalal broke the scam.

The Impact

The impact was almost like a shock to the markets. Even today, it can be counted as one of the biggest scams in the Indian market if you adjust it for inflation. I haven't revealed much of the scam and what happened, simply because Sucheta Dalal and Debasis Basu wrote The Scam for that purpose. How do I shorten a story and experience written over roughly 300-odd pages into one or two paragraphs? It is simply impossible. The impact though was severe. The Indian stock market became a bear market for two years. Sucheta Dalal was kicked out of, or made to resign from, the Indian Express. The thing is simple: all newspapers survive on readership and advertisements, and the advertisements came from companies having a golden run, whether justified or not, on the bourses/stock exchange. For many companies, having a good number on the stock exchange was better than the company fundamentals. There was supposed to be a speedy fast-track court set up for financial crimes, but it worked only for the Harshad Mehta case and still took over 5 years. The scam led to the creation of the NSE (National Stock Exchange). It also led to the creation of SEBI, perhaps one of the most powerful regulators, giving it a wide range of powers and a broad remit, but on the ground it more often than not proved to be no more than a glorified postman. And the few times it used its powers, it used them on the wrong people, and people had to go to the courts to get justice. But then this is not about SEBI, nor is this blog post about the NSE. I have anyway shared about Absolute Power above, so will not repeat the link here.

The anecdotal impact was widespread. Our own family broker took the extreme step. For my grandfather on my mother's side, he was like a second son. The news of his suicide devastated my grandfather quite a bit, which we realized much later when he was diagnosed with Alzheimer's. Our family stockbroker had been punting, taking lots of cash from the market at very high rates and betting wildly on stocks as the stock market was reaching for the stars; when the market crashed, he was insolvent. How the family survived is a tale in itself. He had got married just a few years earlier and had a cute boy and girl soon after. While today both are grown up, at that time what his wife faced only she knows. There were also quite a few shareholders who took the extreme step. The stock markets in those days were largely based on trust, and even today are, unless you are into day-trading. So there was always some money left on the table with the share/stockbroker, which would be squared off in the next deal/transaction, where again you would leave something. My grandfather once thought of going over and meeting them, and we went to the lane where their house is; seeing the line of people who had come for recovery of their loans, we turned back with a heavy heart.

There was another taboo that kind of got broken that day: the taboo that the stock market is open to scams. From 1992 to 2021 has been a cycle of scams. Even now, today, the stock market is at unnatural highs. We know for sure that a lot of hot money is rolling around, a lot of American pension funds etc. Till it works, it will work; some news, something, and that money will be moved out. Who will be left holding the can? The Indian investors. A few days back, Ambani wrote about Adani. Now, while the facts shared are correct, is Adani the only one, the only company to have a small free float in the market?
There are probably more than 1/4th or 1/3rd of well-respected companies who may have a similar configuration; the only problem is that it is difficult to know who the proxies are. Now if I were to reflect and compare this with either the 1960s or even the 1990s, I don't find much difference apart from the fact that the proxy is sitting in Mauritius. At the same time, today you can speculate on almost anything: whether it is stocks, commodities, derivatives, foreign exchange, cricket matches etc., the list is endless. Since 2014, the rise in speculation rather than investment has been dramatic, almost stratospheric. Sadly, there are no studies or even attempts made to document this. How much official and unofficial speculation there is in the market, nobody knows. Money markets have become both fluid and non-transparent. In theory, you have all sorts of regulators, but it is still very much like the Wild West. One thing to note is that even Income Tax had to change and bring in provisions to account for speculative income. So, starting from being totally illegitimate, it has become kind of legal and is part of Income Tax. And if speculation is not wrong, why not make Indian cricket officially a speculative event? That would be honest, and the GOI would get part of the proceeds.

Conclusion

I wish there were some positive conclusion I could draw, but sadly there is not. Just today I read two articles about the ongoing environmental issues in Himachal Pradesh. As I had shared earlier, the last time I visited those places was in 2011, and even at that time I was devastated to see the kind of construction going on. Jogiwara Road, which they showed, used to be flat single ground/first-floor dwellings, most of which were restaurants and whatnot. I had seen the water issues both in Himachal and UT (Uttarakhand) back then, and this is when they made huge dams. In the U.S. they are removing dams, and here we want more dams.

26 June 2021

Enrico Zini: Ansible conditionals in Transilience

This is part of a series of posts on ideas for an Ansible-like provisioning system, implemented in Transilience. I thought a lot of what I managed to do so far with Transilience would be impossible, but then here I am. How about Ansible conditionals? Those must be impossible, right? Let's give it a try.

A quick recon of Ansible sources

Looking into Ansible's sources, when expressions are lists of strings that get AND-ed together. The expressions are Jinja2 expressions that Ansible pastes into a mini-template, renders, and checks the string that comes out.

A quick recon of Jinja2

Jinja2 has a convenient function (jinja2.Environment.compile_expression) that compiles a template snippet into a Python function. It can also parse a template into an AST that can be inspected in various ways.

Evaluating Ansible conditionals in Python

Environment.compile_expression seems to really do precisely what we need for this, straight out of the box. There is an issue with the concept of "defined": for Ansible it seems to mean "the variable is present in the template context". In Transilience instead, all variables are fields in the Role dataclass, and can be None when not set. This means that we need to remove variables that are set to None before passing the parameters to the compiled Jinja2 expression:
class Conditional:
    """
    An Ansible conditional expression
    """
    def __init__(self, engine: template.Engine, body: str):
        # Original unparsed expression
        self.body: str = body
        # Expression compiled to a callable
        self.expression: Callable = engine.env.compile_expression(body)
    def evaluate(self, ctx: Dict[str, Any]):
        ctx = {name: val for name, val in ctx.items() if val is not None}
        return self.expression(**ctx)
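As an aside, here is a small standalone snippet (not from the original post) showing what jinja2.Environment.compile_expression returns, and why the None-stripping above matters:

# Quick illustration of the jinja2.Environment.compile_expression API used
# above: it turns a template expression into a plain Python callable that is
# evaluated against keyword arguments.
import jinja2

env = jinja2.Environment()
expr = env.compile_expression("(is_test is defined and is_test) or debug is defined")

print(expr())                 # False: neither variable is in the context
print(expr(is_test=True))     # True
print(expr(debug="yes"))      # True

# The "defined" catch: a variable passed as None still counts as defined,
# which is why Conditional.evaluate() filters None values out of the context.
is_debug_set = env.compile_expression("debug is defined")
print(is_debug_set(debug=None))   # True, even though the Role field is unset
print(is_debug_set())             # False, once None values are stripped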
Generating Python code

Transilience does not only support running Ansible roles, but also converting them to Python code. I can keep this up by traversing the Jinja2 AST and generating Python expressions. The code is straightforward enough that I can throw in a bit of pattern matching to make some expressions more idiomatic for Python:
class Conditional:
    def __init__(self, engine: template.Engine, body: str):
    ...
        parser = jinja2.parser.Parser(engine.env, body, state='variable')
        self.jinja2_ast: nodes.Node = parser.parse_expression()
    def get_python_code(self) -> str:
        return to_python_code(self.jinja2_ast)
def to_python_code(node: nodes.Node) -> str:
    if isinstance(node, nodes.Name):
        if node.ctx == "load":
            return f"self.{node.name}"
        else:
            raise NotImplementedError(f"jinja2 Name nodes with ctx={node.ctx!r} are not supported: {node!r}")
    elif isinstance(node, nodes.Test):
        if node.name == "defined":
            return f"{to_python_code(node.node)} is not None"
        elif node.name == "undefined":
            return f"{to_python_code(node.node)} is None"
        else:
            raise NotImplementedError(f"jinja2 Test nodes with name={node.name!r} are not supported: {node!r}")
    elif isinstance(node, nodes.Not):
        if isinstance(node.node, nodes.Test):
            # Special case match well-known structures for more idiomatic Python
            if node.node.name == "defined":
                return f"{to_python_code(node.node.node)} is None"
            elif node.node.name == "undefined":
                return f"{to_python_code(node.node.node)} is not None"
        elif isinstance(node.node, nodes.Name):
            return f"not {to_python_code(node.node)}"
        return f"not ({to_python_code(node.node)})"
    elif isinstance(node, nodes.Or):
        return f"({to_python_code(node.left)} or {to_python_code(node.right)})"
    elif isinstance(node, nodes.And):
        return f"({to_python_code(node.left)} and {to_python_code(node.right)})"
    else:
        raise NotImplementedError(f"jinja2 {node.__class__} nodes are not supported: {node!r}")
Scanning for variables

Lastly, I can implement scanning conditionals for variable references to add as fields to the Role dataclass:
class FindVars(jinja2.visitor.NodeVisitor):
    def __init__(self):
        self.found: Set[str] = set()
    def visit_Name(self, node):
        if node.ctx == "load":
            self.found.add(node.name)
class Conditional:
    ...
    def list_role_vars(self) -> Sequence[str]:
        fv = FindVars()
        fv.visit(self.jinja2_ast)
        return fv.found
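As a quick sanity check, here is a small, hypothetical driver (not from the post) that exercises the pieces above on the kind of conditional shown in the next section; it assumes FindVars and to_python_code from the snippets above are in scope:

# Parse an Ansible-style conditional, list the variables it references,
# and emit the equivalent Python expression.
import jinja2
import jinja2.parser

env = jinja2.Environment()
body = "(is_test is defined and is_test) or debug is defined"

ast = jinja2.parser.Parser(env, body, state="variable").parse_expression()

fv = FindVars()
fv.visit(ast)
print(sorted(fv.found))     # ['debug', 'is_test'] -> fields for the Role dataclass

print(to_python_code(ast))  # ((self.is_test is not None and self.is_test)
                            #  or self.debug is not None)

The output matches the fields and the if condition in the generated role shown below.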
The result in action

Take this simple Ansible task:
---
 - name: Example task
   file:
      state: touch
      path: /tmp/test
   when: (is_test is defined and is_test) or debug is defined
Run it through ./provision --ansible-to-python test and you get:
from __future__ import annotations
from typing import Any
from transilience import role
from transilience.actions import builtin, facts
@role.with_facts([facts.Platform])
class Role(role.Role):
    # Role variables used by templates
    debug: Any = None
    is_test: Any = None
    def all_facts_available(self):
        if ((self.is_test is not None and self.is_test)
                or self.debug is not None):
            self.add(
                builtin.file(path='/tmp/test', state='touch'),
                name='Example task')
Besides one harmless set of parentheses too many, what I wasn't sure would be possible is there, right there, staring at me with a mischievous grin. Next: Building a Transilience playbook in a zipapp.

25 June 2021

Enrico Zini: Parsing YAML

This is part of a series of posts on ideas for an Ansible-like provisioning system, implemented in Transilience. The time has come for me to try and prototype whether it's possible to load some Transilience roles from Ansible's YAML instead of Python. The data models of Transilience and Ansible are not exactly the same. Some of the differences that come to mind: To simplify the work, I'll start from loading a single role out of Ansible, not an entire playbook. TL;DR: scroll to the bottom of the post for the conclusion!

Loading tasks

The first problem of loading an Ansible task is to figure out which of the keys is the module name. I have so far failed to find precise reference documentation about which keywords are used to define a task, so I'm going by guesswork and, if needed, a look at Ansible's sources. My first attempt goes by excluding all known non-module keywords:
        candidates = []
        for key in task_info.keys():
            if key in ("name", "args", "notify"):
                continue
            candidates.append(key)
        if len(candidates) != 1:
            raise RoleNotLoadedError(f"could not find a known module in task {task_info!r}")
        modname = candidates[0]
        if modname.startswith("ansible.builtin."):
            name = modname[16:]
        else:
            name = modname
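To make the exclusion heuristic concrete, here is a hypothetical standalone version of it (not the actual Transilience code), applied to the kind of task dict the YAML would produce:

# Hypothetical, standalone sketch of the module-name detection above:
# anything in a task dict that is not a known task-level keyword is assumed
# to be the module being invoked.
from typing import Any, Dict, Tuple

KNOWN_KEYWORDS = {"name", "args", "notify"}

def find_module(task_info: Dict[str, Any]) -> Tuple[str, Any]:
    candidates = [key for key in task_info if key not in KNOWN_KEYWORDS]
    if len(candidates) != 1:
        raise ValueError(f"could not find a known module in task {task_info!r}")
    modname = candidates[0]
    args = task_info[modname]
    # Accept the fully qualified form as well
    if modname.startswith("ansible.builtin."):
        modname = modname[len("ansible.builtin."):]
    return modname, args

# Example:
task = {"name": "Example task", "file": {"state": "touch", "path": "/tmp/test"}}
print(find_module(task))   # ('file', {'state': 'touch', 'path': '/tmp/test'})

Anything beyond that one leftover key (a when, a loop, and so on) would leave more than one candidate and trip the error, which matches the limitation described next.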
This keyword-exclusion approach means that Ansible keywords like when or with will break the parsing, and that's fine, since they are not supported yet. args seems to carry arguments to the module, when the module's main argument is not a dict, as may happen at least with the command module.

Task parameters

One can do all sorts of chaotic things to pass parameters to Ansible tasks: for example, string lists can be lists of strings or strings with comma-separated lists, and they can be preprocessed via Jinja2 templating, and they can be complex data structures that might contain strings that need Jinja2 preprocessing. I ended up mapping the behaviours I encountered in an AST-like class hierarchy which includes recursive complex structures.

Variables

Variables look hard: Ansible has a big free messy cauldron of global variables, and Transilience needs a predefined list of per-role variables. However, variables are mainly used inside Jinja2 templates, and Jinja2 can parse to an Abstract Syntax Tree and has useful methods to examine its AST. Using that, I managed with reasonable effort to scan an Ansible role and generate a list of all the variables it uses! I can then use that list, filter out facts-specific names like ansible_domain, and use them to add variable definitions to the Transilience roles. That is exciting!

Handlers

Before loading tasks, I load handlers as one-action roles, and index them by name. When an Ansible task notifies a handler, I can then look up by name the roles I generated in the earlier pass, and I have all that I need.

Parsed Abstract Syntax Tree

Most of the results of all this parsing started looking like an AST, so I changed the rest of the prototype to generate an AST. This means that, for a well-defined subset of Ansible's YAML, there now exists a tool that is able to parse it into an AST and reason with it. Transilience's playbooks gained a --ansible-to-ast option to parse an Ansible role and dump the resulting AST as JSON:
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
[...]
  -C, --check           do not perform changes, but check if changes would be
                        needed
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
The result is extremely verbose, since every parameter is itself a node in the tree, but I find it interesting. Here is, for example, a node for an Ansible task which has a templated parameter:
     {
      "node": "task",
      "action": "builtin.blockinfile",
      "parameters": {
        "path": {
          "node": "parameter",
          "type": "scalar",
          "value": "/etc/aliases"
        },
        "block": {
          "node": "parameter",
          "type": "template_string",
          "value": "root: {{ postmaster }}\n{% for name, dest in aliases.items() %}\n{{ name }}: {{ dest }}\n{% endfor %}\n"
        }
      },
      "ansible_yaml": {
        "name": "configure /etc/aliases",
        "blockinfile": {},
        "notify": "reread /etc/aliases"
      },
      "notify": [
        "RereadEtcAliases"
      ]
     },
Here's a node for an Ansible template task converted to Transilience's model:
     {
      "node": "task",
      "action": "builtin.copy",
      "parameters": {
        "dest": {
          "node": "parameter",
          "type": "scalar",
          "value": "/etc/dovecot/local.conf"
        },
        "src": {
          "node": "parameter",
          "type": "template_path",
          "value": "dovecot.conf"
        }
      },
      "ansible_yaml": {
        "name": "configure dovecot",
        "template": {},
        "notify": "restart dovecot"
      },
      "notify": [
        "RestartDovecot"
      ]
     },
Executing

The first iteration of prototype code for executing parsed Ansible roles is a little exercise in closures and dynamically generated types:
    def get_role_class(self) -> Type[Role]:
        # If we have handlers, instantiate role classes for them
        handler_classes = {}
        for name, ansible_role in self.handlers.items():
            handler_classes[name] = ansible_role.get_role_class()
        # Create all the functions to start actions in the role
        start_funcs = []
        for task in self.tasks:
            start_funcs.append(task.get_start_func(handlers=handler_classes))
        # Function that calls all the 'Action start' functions
        def role_main(self):
            for func in start_funcs:
                func(self)
        if self.uses_facts:
            role_cls = type(self.name, (Role,), {
                "start": lambda host: None,
                "all_facts_available": role_main
            })
            role_cls = dataclass(role_cls)
            role_cls = with_facts(facts.Platform)(role_cls)
        else:
            role_cls = type(self.name, (Role,), {
                "start": role_main
            })
            role_cls = dataclass(role_cls)
        return role_cls
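For readers unfamiliar with the dynamic class creation going on here, the following is a minimal, self-contained sketch of the same pattern (type() plus applying the dataclass decorator programmatically). It is not Transilience's API, and all names in it are invented for the example.

# Self-contained illustration of the pattern used above: build a class at
# runtime with type(), then apply the dataclass decorator programmatically.
from dataclasses import dataclass, fields
from typing import List

def make_role_class(name: str, task_names: List[str]) -> type:
    def start(self) -> None:
        for task in task_names:   # closure over the task list
            print(f"{name}: running {task!r}")
    cls = type(name, (object,), {
        "start": start,
        "owner": "example",
        # dataclass() picks fields up from __annotations__
        "__annotations__": {"owner": str},
    })
    return dataclass(cls)

Mailserver = make_role_class("Mailserver", ["configure dovecot", "restart dovecot"])
instance = Mailserver(owner="enrico")
print([f.name for f in fields(instance)])   # ['owner']
instance.start()

Transilience's version does the same thing, except the base class is Role and the generated class may additionally be wrapped by with_facts.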
Now that the parsed Ansible role is a proper AST, I'm considering redesigning that using a generic Role class that works as an AST interpreter.

Generating Python

I maintain a library that can turn an invoice into Python code, and I have a convenient AST. I can't not generate Python code out of an Ansible role!
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
[...]
  --ansible-to-python role
                        print the given Ansible role as Transilience Python
                        code
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
And will you look at this annotated extract:
$ ./provision --ansible-to-python mailserver
from __future__ import annotations
from typing import Any
from transilience import role
from transilience.actions import builtin, facts
# Role classes generated from Ansible handlers!
class ReloadPostfix(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='postfix', state='reloaded'),
            name='reload postfix')
class RestartDovecot(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='dovecot', state='restarted'),
            name='restart dovecot')
# The role, including a standard set of facts
@role.with_facts([facts.Platform])
class Role(role.Role):
    # These are the variables used by Jinja2 template files and strings. I need
    # to use Any, since Ansible variables are not typed
    aliases: Any = None
    myhostname: Any = None
    postmaster: Any = None
    virtual_domains: Any = None
    def all_facts_available(self):
        ...
        # A Jinja2 string inside a string list!
        self.add(
            builtin.command(
                argv=[
                    'certbot', 'certonly', '-d',
                    self.render_string('mail.{{ ansible_domain }}'), '-n',
                    '--apache'
                ],
                creates=self.render_string(
                    '/etc/letsencrypt/live/mail.{{ ansible_domain }}/fullchain.pem'
                )),
            name='obtain mail.* letsencrypt certificate')
        # A converted template task!
        self.add(
            builtin.copy(
                dest='/etc/dovecot/local.conf',
                src=self.render_file('templates/dovecot.conf')),
            name='configure dovecot',
            # Notify referring to the corresponding Role class!
            notify=RestartDovecot)
        # Referencing a variable collected from a fact!
        self.add(
            builtin.copy(dest='/etc/mailname', content=self.ansible_domain),
            name='configure /etc/mailname',
            notify=ReloadPostfix)
        ...
Conclusion

Transilience can load a (growing) subset of Ansible syntax, one role at a time, which contains: The role loader in Transilience now looks for YAML when it does not find a Python module, and runs it pipelined and fast! There is code to generate Python code from an Ansible module: you can take an Ansible role, convert it to Python, and then work on it to add more complex logic, or clean it up for adding it to a library of reusable roles! Next: Ansible conditionals

20 June 2021

Mike Gabriel: BBB Packaging for Debian, a short Heads-Up

Over the past days, I have received tons of positive feedback on my previous blog post about forming the Debian BBB Packaging Team [1]. Feedback arrived via mail, IRC, [matrix] and Mastodon. Awesome. Thanks for sharing your thoughts, folks... Therefore, here comes a short ... Heads-Up on the current Ongoings ... around packaging BigBlueButton for Debian: Credits light+love
Mike Gabriel

[1] https://sunweavers.net/blog/node/133
[2] https://bigbluebutton.org/event-page/
[3] https://docs.google.com/document/d/1kpYJxYFVuWhB84bB73kmAQoGIS59ari1_hn2...
